
docs: add mathematical formulas to metric docstrings#13

Merged
jeffcarp merged 1 commit into google:main from nikolasavic3:main
Feb 20, 2025

Conversation


@nikolasavic3 nikolasavic3 commented Feb 16, 2025

This PR adds mathematical definitions to the metric class docstrings using Sphinx LaTeX directives.

I am attaching a PDF of the rendered site to show how the documentation displays the math formulas.
metrax — metrax documentation.pdf

Fixes #11

@jeffcarp jeffcarp self-requested a review February 18, 2025 18:48

@jeffcarp jeffcarp left a comment


Thank you so much for the contribution @nikolasavic3! Left some nits, otherwise LGTM

 class MSE(clu_metrics.Average):
-  """Computes the mean squared error for regression problems given `predictions` and `labels`."""
+  r"""Computes the mean squared error for regression problems given `predictions` and `labels`.


Nit: could you remove any leading whitespace in the docstrings (on this line and any below)?

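Combining the raw-string suggestion and the whitespace nit, the resulting docstring might look something like the sketch below. This is a hypothetical illustration, not the merged code: `clu_metrics.Average` is replaced by a plain class so the sketch is self-contained, and the formula is the standard MSE definition rather than the exact merged text.

```python
# Hypothetical sketch of the docstring style this PR adds: a raw string
# (so LaTeX backslashes survive) containing a Sphinx ``.. math::``
# directive, with no leading whitespace in the docstring body.
class MSE:  # stand-in for the clu_metrics.Average subclass in metrax
    r"""Computes the mean squared error for regression problems given `predictions` and `labels`.

    .. math::
        \text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

    where :math:`y_i` are the labels and :math:`\hat{y}_i` the predictions.
    """
```

The raw-string prefix matters: without `r"""`, sequences like `\t` and `\f` in the LaTeX would be interpreted as escape characters before Sphinx ever sees them.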

The Precision-Recall curve shows the tradeoff between precision and recall at different
classification thresholds. The area under this curve (AUC-PR) provides a single score
that represents the model's ability to distinguish between classes across all possible
thresholds.

Can this have slightly more detail to distinguish it from the docstring in AUCROC? Maybe like:

... provides a single score that represents the model's ability to identify positive cases across all possible classification thresholds, particularly in imbalanced datasets.
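Folding the reviewer's suggested wording into the docstring could give something like the following. This is a sketch under assumptions: the class name `AUCPR` and the discrete summation formula (the standard average-precision estimate of the area) are illustrative, not quoted from the merged code.

```python
# Hypothetical sketch: AUCPR docstring with the reviewer's suggested
# wording and a Sphinx math directive for the area estimate.
class AUCPR:  # stand-in for the metrax metric class
    r"""Computes the area under the Precision-Recall curve (AUC-PR).

    The Precision-Recall curve shows the tradeoff between precision and
    recall at different classification thresholds. The area under this
    curve (AUC-PR) provides a single score that represents the model's
    ability to identify positive cases across all possible classification
    thresholds, particularly in imbalanced datasets.

    .. math::
        \text{AUC-PR} = \sum_n (R_n - R_{n-1}) \, P_n

    where :math:`P_n` and :math:`R_n` are the precision and recall at the
    :math:`n`-th threshold.
    """
```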

The ROC curve shows the tradeoff between the true positive rate (TPR) and false positive
rate (FPR) at different classification thresholds. The area under this curve (AUC-ROC)
provides a single score that represents the model's ability to distinguish between classes
across all possible thresholds.

For this one, maybe an extra detail:

... represents the model's ability to discriminate between positive and negative cases across all possible classification thresholds, regardless of class imbalance.
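For symmetry, the AUC-ROC docstring with this suggested wording might read as below. Again a sketch, not the merged text: the class name `AUCROC` is assumed, and the integral is the standard definition of the area under the ROC curve.

```python
# Hypothetical sketch: AUCROC docstring with the reviewer's suggested
# wording, distinguishing it from the AUC-PR docstring.
class AUCROC:  # stand-in for the metrax metric class
    r"""Computes the area under the ROC curve (AUC-ROC).

    The ROC curve shows the tradeoff between the true positive rate (TPR)
    and false positive rate (FPR) at different classification thresholds.
    The area under this curve (AUC-ROC) provides a single score that
    represents the model's ability to discriminate between positive and
    negative cases across all possible classification thresholds,
    regardless of class imbalance.

    .. math::
        \text{AUC-ROC} = \int_0^1 \text{TPR} \; d(\text{FPR})
    """
```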

@nikolasavic3 (Author)

Thank you for reviewing!

@jeffcarp

Thanks again for the contribution!

@jeffcarp jeffcarp merged commit 9d50348 into google:main Feb 20, 2025
3 checks passed

Development

Successfully merging this pull request may close these issues.

Add eval metric formulas to docs

2 participants